Wikipedia:Wikipedia Signpost/2023-02-04/Special report
Legal status of Wikimedia projects "unclear" under potential European legislation
In two major English-speaking countries, two separate legal mechanisms are working their way through two separate processes. The first is a United States Supreme Court case regarding §230 of the Communications Decency Act, and the second is a proposed Act of the Parliament of the United Kingdom "intended to improve internet safety". Both have wide-ranging implications for posters, lurkers, and everyone in between, and both have been the subject of fierce debate.
Online Safety Bill
The Online Safety Bill is a proposed Act of Parliament in the United Kingdom. In the last few months, some (including the British Broadcasting Corporation) have been sounding the alarm about the hazards of the web and the necessity for "proportionate measures", like making website owners "criminally liable for failing to give information to media regulator Ofcom". Others, like Chris Stokel-Walker in the Washington Post, have called the bill a "tangled mess born of political wrangling"; the Electronic Frontier Foundation described it as a "threat to free expression" that "undermines the encryption that we all rely on for security and privacy online". Mike Masnick, writing for TechDirt, called it "the UK's latest (in a long line) of attempts to 'Disneyfy' the internet".
While it is already possible for Britons to face jail time over something as minor as a single retweet of a post like "the only good Brit soldier is a deed one, burn auld fella buuuuurn", the proposed Act would broaden the government's power to take action against posts, under broad categories like "when a person sends a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm".
Nadine Dorries, the UK's former Culture Secretary (a role which encompasses digital responsibility), has said that the bill would "make the UK the safest place in the world to be online while enshrining free speech" by "protecting the most vulnerable from accessing harmful content, and ensuring there is no safe space for terrorists to hide online".
Recently, the Wikimedia Foundation has weighed in on the debate, in the wake of proposed changes to the bill which add provisions that "senior managers at tech firms could face up to two years in jail if they breach new duties to keep children safe online". While the bill has been written with some carve-outs for broadcast media and journalists, the general assumption with regard to websites is that they are businesses run by companies; the legal status of volunteer moderation is unclear. The BBC cites solicitor Neil Brown as saying:
“The bill, and the amendment, would impose pages of duties on someone who, for fun, runs their own social media or photo/video sharing server, or hosts a multi-player game which lets players chat or see each other's content or creations.”
Specifically, the bill draws little distinction between "content moderation", carried out at industrial scale by paid employees at large firms like Meta (née Facebook) and Google, and... whatever ArbCom and AN/I are. There aren't handy buzzwords for things that don't scale to a billion users. Reason says that the bill's implications for Wikipedia are "not entirely clear", citing the concerns of Rebecca MacKinnon, the Wikimedia Foundation's Vice President of Global Advocacy. A blog post from the Foundation's policy team goes into greater depth:
Wikipedia’s volunteer-driven governance model is what allows all of this to work, since it facilitates decentralized decision-making about content on the website. This model of curation of free and open knowledge is led by volunteers who collaborate to expand the encyclopedia and maintain high quality information that is freely accessible around the world. It depends on strong protections for the right to freedom of expression and privacy, and in turn it furthers the right to participate in culture and science, as well as the right to education.
The Wikimedia Foundation, as the nonprofit host of Wikipedia, along with affiliated organizations such as Wikimedia UK, and the larger movement of volunteers support efforts to make the internet safer. When people are harassed or feel otherwise unsafe communicating online, their ability to access, create or share knowledge is diminished. We believe online safety can only be achieved when adequate safeguards for privacy and freedom of expression are in place.
Unfortunately, however, the UK OSB not only threatens freedom of expression and privacy for readers and volunteers alike, but also threatens Wikipedia’s volunteer-driven governance model. In order to "make the UK the safest place to go online," the legislation seeks to impose numerous duties on platforms hosting user-generated content, including requirements to implement processes to limit or prevent access to illegal or harmful content. Such duties as currently drafted will interfere with the ways that Wikipedia works.
While the OSB as it stands in early November 2022 has been revised to address serious concerns about who has the power to define and order deletion of "lawful but harmful" content affecting adults, many aspects of the OSB remain highly problematic. Chief among those are the failure to protect freedom of expression and community-driven content moderation processes. We are also deeply concerned about the privacy implications of collecting user data for mandatory age verification. With the shared goal of making the internet better and safer for all while also protecting Wikipedia and other Wikimedia projects, we offer our recommendations for revisions of the OSB.
— WMF Policy
Ultimately, it remains to be seen what the broader implications of this legislation will be, or whether it will pass at all. Government oversight of personal communications has certainly played a role in much of human history, but there is little legal precedent on criminal penalties for "lawful but harmful" content hosted on websites where editorial decisions are made by groups of volunteer collaborators. It is difficult to imagine a situation in which a volunteer encyclopedia remained accessible in a country where local chapter members faced jail time for its coverage of contentious topics or encyclopedically relevant (but shocking and offensive) illustrations. Of course, in countries that have constitutionally guaranteed freedom of expression, the precedent on this has generally been "leave us alone" – which brings us to the American legislation in this issue – but that is neither here nor there.
"Chat control": big if true
On January 23, Swedish newspaper Svenska Dagbladet published an article titled "Stoppa förslaget om massövervakning i EU" ("Stop the proposal on mass surveillance in the EU"). The article has been making the rounds on the web in the last couple of days, after being translated into English and republished in a post from Mullvad VPN. It warns of an impending legislative proposal in the European Commission, ostensibly intended to prevent child abuse, that would "monitor and audit the communication of all European Union citizens", including e-mail, instant messaging, and text messages. On February 1, Mullvad went further, in a post saying that the law would "ban open source operating systems".
Certainly, if these predictions are accurate, such measures would pose a major threat to the web as we know it. However, it is unclear precisely what the extent or implementation of the proposed legislation would be. The initiative has existed for some time, but has not been reported on very widely; most coverage has come from critics and advocacy organizations. Patrick Breyer, a Pirate Party activist and Member of the European Parliament, has posted on his website about the potential for these "chat control" measures to permit government surveillance of private cloud storage and end online anonymity. Last May, Wired said that the proposal "could undermine end-to-end encryption for billions of people", and in October, the Electronic Frontier Foundation strenuously opposed the measure as an "ineffectual and even harmful" step towards authoritarianism.
The Wikimedia Foundation's feedback on the proposal, from September of last year, says that while "the Foundation supports the European Commission’s goals of fighting child sexual abuse and effectively removing CSAM online, we are concerned that some of the requirements will disproportionately impact smaller or nonprofit platforms through unrealistic deadlines for both content removal and compliance obligations".
The European Commission has information on their website regarding the initiative, called "Fighting child sexual abuse: detection, removal and reporting of illegal content online". Big if true, indeed. But who knows?
What does it mean?
While it is easy to come away from headlines like these with a doomer attitude – and, indeed, doom may be on the menu – it is also crucial to remember that so is hope. The subject of free expression on the Internet has been a political hot potato for decades. The astute reader will recall Signpost coverage of Wikimedia projects' role in the web-wide protests against SOPA and PIPA, two proposed United States laws from 2011 that posed similar threats to posting. At that time, the main concern was piracy ("SOPA" and "PIPA" stood for "Stop Online Piracy Act" and "Protect IP Act", respectively); while the nature of the issues has changed, and the scope of the debate has broadened with the Internet's increasing relevance to daily life, optimism may not be entirely unwarranted here. Or maybe we are all completely hosed. Only time will tell!
Discuss this story